
    'He Forgot': Young Children's Use of Cognitive Explanations for Another Person's Mistakes

    Children, ages 4 and 5 years, and adults were asked (a) to explain a story character's incorrect search for a desired object, and (b) to explain the source of the character's ignorance or false belief concerning the object's true location. The character either (a) did not receive information about the object's location, (b) received information about the object's original location, but not about a subsequent change of location, (c) received information but searched for the object after a delay, or (d) received information about the object's location, but was engaged in another activity when the information was presented. With increased age, there was an increase in explanations that referred to perceptual experience or cognitive activities as the source of the character's ignorance or false belief. By age 5 years, children shifted between explanations that referred to perceptual experience or to the cognitive activities of forgetting or attentional focus, depending upon the circumstances in which the incorrect search occurred. During the late preschool years, a conception of cognitive activities as contributing to knowledge and belief becomes integrated into children's conceptual framework for explaining human action.

    The evolution and development of visual perspective taking

    I outline three conceptions of seeing that a creature might possess: ‘the headlamp conception,’ which involves an understanding of the causal connections between gazing at an object, certain mental states, and behavior; ‘the stage lights conception,’ which involves an understanding of the selective nature of visual attention; and seeing-as. I argue that infants and various nonhumans possess the headlamp conception. There is also evidence that chimpanzees and 3-year-old children have some grasp of seeing-as. However, due to a dearth of studies, there is no evidence that infants or nonhumans possess the stage lights conception of seeing. I outline the kinds of experiments that are needed, and what we stand to learn about the evolution and development of perspective taking.

    Children's suggestibility in relation to their understanding about sources of knowledge

    In the experiments reported here, children chose either to maintain their initial belief about an object's identity or to accept the experimenter's contradicting suggestion. Both 3- to 4-year-olds and 4- to 5-year-olds were good at accepting the suggestion only when the experimenter was better informed than they were (implicit source monitoring). They were less accurate at recalling both their own and the experimenter's information access (explicit recall of experience), though they performed well above chance. Children were least accurate at reporting whether their final belief was based on what they were told or on what they experienced directly (explicit source monitoring). Contrasting results emerged when children decided between contradictory suggestions from two differentially informed adults: Three- to 4-year-olds were more accurate at reporting the knowledge source of the adult they believed than at deciding which suggestion was reliable. Decision making in this observation task may require reflective understanding akin to that required for explicit source judgments when the child participates in the task.

    Intrinsic gain modulation and adaptive neural coding

    In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs. current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. When the underlying system is fixed, we derive relationships connecting the change of the gain with respect to both mean and variance to the receptive fields obtained from reverse correlation on a white-noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that the coding properties of both models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity.
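
    The gain-modulation effect described above can be illustrated with a far simpler model than the conductance-based neurons studied in the paper. The sketch below is an illustrative stand-in, not the authors' code: it simulates a leaky integrate-and-fire neuron driven by a constant current plus Gaussian background noise and reports how the slope of the f-I curve depends on the noise level. All parameter values are assumptions chosen for the demonstration.

```python
import numpy as np

def lif_rates(currents, noise_std, t_max=30.0, dt=1e-4,
              tau=0.02, v_th=1.0, v_reset=0.0, seed=0):
    """Firing rates (Hz) of leaky integrate-and-fire neurons, one per input
    current (dimensionless voltage units), with Gaussian background voltage
    noise whose stationary standard deviation is noise_std."""
    rng = np.random.default_rng(seed)
    currents = np.asarray(currents, dtype=float)
    v = np.zeros_like(currents)
    spike_counts = np.zeros_like(currents)
    noise_scale = noise_std * np.sqrt(2.0 * dt / tau)  # gives stationary voltage std = noise_std
    for _ in range(int(t_max / dt)):
        v += dt / tau * (currents - v) + noise_scale * rng.standard_normal(v.shape)
        fired = v >= v_th
        spike_counts += fired
        v[fired] = v_reset
    return spike_counts / t_max

# f-I curves at two background-noise levels: stronger noise smooths the curve
# around threshold and changes its slope, i.e. the gain.
currents = np.linspace(0.8, 1.6, 9)
for noise_std in (0.05, 0.25):
    rates = lif_rates(currents, noise_std)
    gain = np.gradient(rates, currents)                # local slope dF/dI
    print(f"noise_std={noise_std}: rates (Hz) = {np.round(rates, 1)}")
    print(f"              gain (Hz per unit current) = {np.round(gain, 1)}")
```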

    Stimulus-dependent maximum entropy models of neural population codes

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. To be able to infer a model for this distribution from large-scale neural recordings, we introduce a stimulus-dependent maximum entropy (SDME) model, a minimal extension of the canonical linear-nonlinear model of a single neuron to a pairwise-coupled neural population. The model is able to capture the single-cell response properties as well as the correlations in neural spiking due to the shared stimulus and due to effective neuron-to-neuron connections. Here we show that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. As a result, the SDME model gives a more accurate account of single-cell responses and in particular outperforms uncoupled models in reproducing the distributions of codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like surprise and information transmission in a neural population.
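
    As a rough sketch of this model class, and not the authors' fitting code, the snippet below writes down the SDME distribution over binary codewords for a toy population: each cell receives a stimulus-dependent field from its own linear filter, and symmetric pairwise couplings add stimulus-independent interactions. The filters, bias, and couplings are arbitrary illustrative values, and the population is kept small enough to enumerate every codeword exactly.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

n_cells, stim_dim = 5, 20
filters = 0.3 * rng.standard_normal((n_cells, stim_dim))     # per-cell linear filters (toy values)
bias = -1.0 * np.ones(n_cells)                               # negative bias keeps firing sparse
J = 0.2 * rng.standard_normal((n_cells, n_cells))
J = np.triu(J, 1) + np.triu(J, 1).T                          # symmetric couplings, zero diagonal

codewords = np.array(list(product([0, 1], repeat=n_cells)))  # all 2^n binary spiking patterns

def sdme_distribution(stimulus):
    """P(codeword | stimulus) for a stimulus-dependent maximum entropy model
    with energy E(r|s) = -sum_i h_i(s) r_i - 0.5 * sum_{ij} J_ij r_i r_j."""
    h = filters @ stimulus + bias                            # stimulus-dependent fields h_i(s)
    energy = -(codewords @ h) - 0.5 * np.einsum('ki,ij,kj->k', codewords, J, codewords)
    p = np.exp(-energy)
    return p / p.sum()

stimulus = rng.standard_normal(stim_dim)
p = sdme_distribution(stimulus)
print("most likely codewords for this stimulus:")
for idx in np.argsort(p)[::-1][:5]:
    print(codewords[idx], f"p = {p[idx]:.3f}")
```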

    Receptive Field Inference with Localized Priors

    The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both space-time and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find that it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets.
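
    ALD learns its locality hyperparameters by empirical Bayes, which is more machinery than fits here; the sketch below only illustrates the simpler idea it builds on: a maximum a posteriori receptive field estimate under a fixed Gaussian smoothness prior, compared against the spike-triggered average on simulated data. The simulated receptive field, the prior, and all parameter values are assumptions chosen for illustration, not the ALD prior itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: a smooth, localized 1-D receptive field, white-noise
# stimuli, and Poisson spike counts from a rectified-linear response.
d, n_samples = 40, 400
x = np.arange(d)
true_rf = np.exp(-0.5 * ((x - 15) / 3.0) ** 2) - 0.6 * np.exp(-0.5 * ((x - 24) / 3.0) ** 2)
stimuli = rng.standard_normal((n_samples, d))
spikes = rng.poisson(np.clip(stimuli @ true_rf, 0.0, None))

# Spike-triggered average: proportional to the true filter for white-noise
# stimuli, but noisy when data are limited.
sta = stimuli.T @ spikes / max(spikes.sum(), 1)

# MAP estimate under a Gaussian prior that penalizes differences between
# neighboring coefficients: a hand-tuned, simplified stand-in for the
# hierarchical locality prior that ALD learns automatically.
D = np.diff(np.eye(d), axis=0)            # first-difference operator
lam = 200.0                               # prior strength (hand-picked here)
rf_map = np.linalg.solve(stimuli.T @ stimuli + lam * D.T @ D, stimuli.T @ spikes)

def normalized_error(estimate):
    e = estimate / np.linalg.norm(estimate)
    t = true_rf / np.linalg.norm(true_rf)
    return np.linalg.norm(e - t)

print(f"error, STA: {normalized_error(sta):.3f}   smoothed MAP: {normalized_error(rf_map):.3f}")
```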

    Functional Clustering Drives Encoding Improvement in a Developing Brain Network during Awake Visual Learning

    Sensory experience drives dramatic structural and functional plasticity in developing neurons. However, for single-neuron plasticity to optimally improve whole-network encoding of sensory information, changes must be coordinated between neurons to ensure a full range of stimuli is efficiently represented. Using two-photon calcium imaging to monitor evoked activity in over 100 neurons simultaneously, we investigate network-level changes in the developing Xenopus laevis tectum during visual training with motion stimuli. Training causes stimulus-specific changes in neuronal responses and interactions, resulting in improved population encoding. This plasticity is spatially structured, increasing tuning curve similarity and interactions among nearby neurons, and decreasing interactions among distant neurons. Training does not improve encoding by single clusters of similarly responding neurons, but improves encoding across clusters, indicating coordinated plasticity across the network. NMDA receptor blockade prevents coordinated plasticity, reduces clustering, and abolishes whole-network encoding improvement. We conclude that NMDA receptors support experience-dependent network self-organization, allowing efficient population coding of a diverse range of stimuli.
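
    Two of the network-level quantities used in this kind of analysis, tuning-curve similarity between neurons and trial-to-trial interactions, can be computed directly from a trials x stimuli x neurons response array. The sketch below does so on synthetic data standing in for the calcium-imaging responses; it is illustrative only and is not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for evoked responses, shape (n_trials, n_stimuli, n_neurons).
n_trials, n_stimuli, n_neurons = 20, 4, 50
tuning = rng.standard_normal((n_stimuli, n_neurons))       # mean response of each neuron per stimulus
responses = tuning[None] + 0.5 * rng.standard_normal((n_trials, n_stimuli, n_neurons))

# Tuning-curve (signal) correlation: similarity of trial-averaged responses.
mean_resp = responses.mean(axis=0)                         # (n_stimuli, n_neurons)
signal_corr = np.corrcoef(mean_resp.T)                     # (n_neurons, n_neurons)

# Noise correlation: correlation of trial-to-trial fluctuations around the mean,
# pooled over stimuli; a simple proxy for interactions between neurons.
residuals = (responses - mean_resp[None]).reshape(-1, n_neurons)
noise_corr = np.corrcoef(residuals.T)

off_diag = ~np.eye(n_neurons, dtype=bool)
print(f"mean signal correlation: {signal_corr[off_diag].mean():.3f}")
print(f"mean noise correlation:  {noise_corr[off_diag].mean():.3f}")
```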

    From Spiking Neuron Models to Linear-Nonlinear Models

    Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static nonlinear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static nonlinearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static nonlinearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
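
    The paper obtains the filter and nonlinearity analytically; the sketch below instead takes the standard reverse-correlation route mentioned in the abstract, applied to a plain leaky integrate-and-fire neuron driven by white-noise current as a stand-in for the models studied here. The spike-triggered average supplies the linear filter and a histogram ratio supplies the static nonlinearity; all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulate a leaky integrate-and-fire neuron driven by noisy current ---
dt, tau, v_th, v_reset = 1e-3, 0.02, 1.0, 0.0
n_steps = 200_000
stimulus = 1.1 + 0.5 * rng.standard_normal(n_steps)        # mean + white-noise current
v, spikes = 0.0, np.zeros(n_steps, dtype=bool)
for t in range(n_steps):
    v += dt / tau * (stimulus[t] - v)
    if v >= v_th:
        spikes[t] = True
        v = v_reset

# --- Linear filter: spike-triggered average of the preceding stimulus ---
filt_len = 100                                             # 100 ms of stimulus history
spike_times = np.flatnonzero(spikes)
spike_times = spike_times[spike_times >= filt_len]
sta = np.mean([stimulus[t - filt_len:t] for t in spike_times], axis=0) - stimulus.mean()

# --- Static nonlinearity: spike probability as a function of the filter output ---
filtered = np.convolve(stimulus, sta[::-1], mode='valid')[:-1]   # filter output at each step
later_spikes = spikes[filt_len:]                                 # spikes aligned with filter output
bins = np.quantile(filtered, np.linspace(0.0, 1.0, 21))
which = np.digitize(filtered, bins[1:-1])
rate_per_bin = np.array([later_spikes[which == b].mean() / dt for b in range(20)])
print("static nonlinearity, firing rate (Hz) per filter-output bin:")
print(np.round(rate_per_bin, 1))
```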

    Differing views: can chimpanzees do Level 2 perspective-taking?

    Although chimpanzees understand what others may see, it is unclear if they understand how others see things (Level 2 perspective-taking). We investigated whether chimpanzees can predict the behavior of a conspecific holding a mistaken perspective that differs from their own. The subject competed with a conspecific over two food sticks. While the subject could see that both were the same size, to the competitor one appeared bigger than the other. In a previously established game, the competitor chose one stick in private first and the subject chose thereafter, without knowing which of the sticks was gone. Chimpanzees and 6-year-old children chose the ‘riskier’ stick (that looked bigger to the competitor) significantly less in the game than in a nonsocial control. Children chose randomly in the control, thus showing Level 2 perspective-taking skills; in contrast, chimpanzees had a preference for the ‘riskier’ stick here, rendering it possible that they attributed their own preference to the competitor to predict her choice. We therefore ran a follow-up in which chimpanzees did not have a preference in the control. Now they also chose randomly in the game. We conclude that chimpanzees solved the task by attributing their own preference to the other, while children truly understood the other's mistaken perspective.

    Modeling convergent ON and OFF pathways in the early visual system

    To understand the computation and function of single neurons in sensory systems, one needs to investigate how sensory stimuli are related to a neuron's response and which biological mechanisms underlie this relationship. Mathematical models of the stimulus–response relationship have proved very useful in approaching these issues in a systematic, quantitative way. A starting point for many such analyses has been provided by phenomenological “linear–nonlinear” (LN) models, which comprise a linear filter followed by a static nonlinear transformation. The linear filter is often associated with the neuron's receptive field. However, the structure of the receptive field is generally a result of inputs from many presynaptic neurons, which may form parallel signal processing pathways. In the retina, for example, certain ganglion cells receive excitatory inputs from ON-type as well as OFF-type bipolar cells. Recent experiments have shown that the convergence of these pathways leads to intriguing response characteristics that cannot be captured by a single linear filter. One approach to adjust the LN model to the biological circuit structure is to use multiple parallel filters that capture ON and OFF bipolar inputs. Here, we review these new developments in modeling neuronal responses in the early visual system and provide details about one particular technique for obtaining the required sets of parallel filters from experimental data.
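
    A minimal sketch of the multi-filter idea described above, with made-up filters rather than filters fit to data: the model's response sums an ON pathway and an OFF pathway, each rectified separately after its own linear filter. For comparison, a single-filter LN model built from the summed pathway filters is evaluated on the same stimulus; it cannot reproduce the separate rectification of both contrast polarities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy temporal filters for the ON and OFF pathways (opposite polarity).
t = np.arange(40) * 1e-3                                   # 40 ms of history, 1 ms bins
on_filter = np.exp(-t / 0.01) * np.sin(2 * np.pi * t / 0.04)
off_filter = -on_filter

def rectify(x):
    return np.maximum(x, 0.0)

def on_off_ln_rate(stimulus):
    """Two-pathway LN model: each pathway filters the stimulus and is
    rectified separately before the outputs are summed."""
    on_drive = rectify(np.convolve(stimulus, on_filter, mode='valid'))
    off_drive = rectify(np.convolve(stimulus, off_filter, mode='valid'))
    return on_drive + 0.7 * off_drive                      # unequal pathway weights

def single_filter_ln_rate(stimulus):
    """Single-filter LN model using the summed pathway filters: one filter,
    one rectification, so only one contrast polarity can drive a response."""
    summed_filter = on_filter + 0.7 * off_filter
    return rectify(np.convolve(stimulus, summed_filter, mode='valid'))

stimulus = rng.standard_normal(5000)
r2 = np.corrcoef(on_off_ln_rate(stimulus), single_filter_ln_rate(stimulus))[0, 1] ** 2
print(f"variance of the two-pathway response captured by the single-filter model: {r2:.2f}")
```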